# Mathematical reasoning
## OpenThinker3-7B-GGUF
**QuantFactory** · Apache-2.0 · Large Language Model · Transformers · Downloads: 114 · Likes: 2

OpenThinker3-7B-GGUF is a quantized version of open-thoughts/OpenThinker3-7B, optimized for efficient inference. The base model is fine-tuned from Qwen/Qwen2.5-7B-Instruct and performs strongly on mathematical, coding, and scientific problems.
## Chinda-Qwen3-4B-GGUF
**iapp** · Apache-2.0 · Large Language Model · Downloads: 115 · Likes: 1

Chinda LLM 4B is a Thai model released by iApp Technology. Built on the Qwen3-4B architecture, it brings advanced reasoning ("thinking") capabilities to the Thai AI ecosystem.
## Spec-T1-RL-7B
**SVECTOR-CORPORATION** · MIT · Large Language Model · Safetensors · English · Downloads: 4,626 · Likes: 6

Spec-T1-RL-7B is a high-precision large language model focused on mathematical reasoning, algorithmic problem-solving, and code generation, with strong results on technical benchmarks.
## Phi-4-Mini-Reasoning-MLX-4bit
**lmstudio-community** · MIT · Large Language Model · Downloads: 72.19k · Likes: 2

A 4-bit quantized MLX conversion of Microsoft's Phi-4-mini-reasoning model, suitable for text-generation tasks.
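Blockwise 4-bit quantization of the kind this conversion uses stores a small group of weights as signed integers plus one shared scale per block. Below is a toy sketch of the idea in plain Python; real formats such as MLX 4-bit or GGUF Q4 differ in block size, rounding mode, and bit packing, so treat this as an illustration, not the actual codec.

```python
# Toy symmetric blockwise quantization: one float scale per block,
# weights rounded to signed integers in [-levels, levels].
# (Real 4-bit formats pack two values per byte and use fixed block sizes.)

def quantize_block(weights, levels=7):
    """Quantize a block of floats to signed ints sharing one scale."""
    scale = max(abs(w) for w in weights) / levels or 1.0  # guard all-zero block
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_block(q, scale):
    """Recover approximate floats from the quantized block."""
    return [v * scale for v in q]

block = [0.12, -0.53, 0.98, -0.07]
q, s = quantize_block(block)
approx = dequantize_block(q, s)
err = max(abs(a - b) for a, b in zip(block, approx))
print(q, round(err, 3))  # small per-weight error, 4 bits per weight + 1 scale
```

The accuracy/size trade-off is visible directly: each weight shrinks to 4 bits at the cost of a bounded rounding error per block.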
## Phi-4-Reasoning-GGUF
**unsloth** · MIT · Large Language Model · Transformers · Downloads: 6,046 · Likes: 7

Phi-4-reasoning is an advanced reasoning model fine-tuned from Phi-4. Through supervised fine-tuning and reinforcement learning, it demonstrates strong reasoning ability in mathematics, science, and coding.
## Phi-4-Mini-Reasoning-GGUF
**unsloth** · MIT · Large Language Model · Supports Multiple Languages · Downloads: 21.71k · Likes: 27

Phi-4-mini-reasoning is a lightweight open model trained on synthetic data, focused on high-quality, dense reasoning data and further fine-tuned for stronger mathematical reasoning.
## DeepSeek-Prover-V2-671B
**deepseek-ai** · Large Language Model · Transformers · Downloads: 9,693 · Likes: 773

An open-source large language model built for formal theorem proving in Lean 4. Its training data is collected through a recursive theorem-proving pipeline that combines informal and formal mathematical reasoning.
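For readers unfamiliar with the target format, here is a minimal Lean 4 theorem and proof of the kind such a prover emits. This is an illustrative toy, far simpler than the competition-level goals DeepSeek-Prover-V2 targets; `Nat.add_comm` is a standard-library lemma.

```lean
-- A trivial Lean 4 goal closed by citing a library lemma.
-- A formal prover must produce a term or tactic script that
-- the Lean kernel accepts, so every proof is machine-checked.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```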
## Phi-4-Mini-Reasoning
**microsoft** · MIT · Large Language Model · Transformers · Supports Multiple Languages · Downloads: 18.93k · Likes: 152

Phi-4-mini-reasoning is a lightweight open-source model focused on high-quality, dense reasoning data, further fine-tuned for more advanced mathematical reasoning.
## AceMath-RL-Nemotron-7B
**nvidia** · Other · Large Language Model · Transformers · English · Downloads: 2,990 · Likes: 16

A deep-learning-based automatic math problem solver supporting a wide range of question types, including algebra, geometry, and calculus.
## OpenMath-Nemotron-7B
**nvidia** · Large Language Model · Transformers · English · Downloads: 153 · Likes: 6

OpenMath-Nemotron-7B is a mathematical reasoning model fine-tuned from Qwen2.5-Math-7B on the OpenMathReasoning dataset, achieving state-of-the-art results on multiple mathematical benchmarks.
## Turkish-Gemma-9b-v0.1
**ytu-ce-cosmos** · Large Language Model · Safetensors · Downloads: 167 · Likes: 18

Turkish-Gemma-9b-v0.1 is a Turkish text-generation model based on Gemma-2-9b, optimized through continued pretraining, supervised fine-tuning (SFT), direct preference optimization (DPO), and model merging.
## Nova-0.5-e3-7B
**oscar128372** · Apache-2.0 · Large Language Model · Transformers · English · Downloads: 90 · Likes: 2

Nova 0.5 e3 is a 7B-parameter text-generation model that exhibits striking emergent behavior, particularly in mathematical reasoning.
## Light-R1-32B-DS
**qihoo360** · Apache-2.0 · Large Language Model · Transformers · Downloads: 1,136 · Likes: 13

Light-R1-32B-DS is a near-SOTA 32B mathematical model fine-tuned from DeepSeek-R1-Distill-Qwen-32B, reaching high performance with only 3K SFT examples.
## QwQ-32B-FP8-dynamic
**nm-testing** · MIT · Large Language Model · Transformers · Downloads: 3,895 · Likes: 3

An FP8-quantized version of QwQ-32B. Dynamic quantization cuts storage and memory requirements by 50% while preserving 99.75% of the original model's accuracy.
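The 50% figure follows directly from byte widths: FP8 stores one byte per weight versus two for BF16/FP16. A quick sketch of the arithmetic, noting that the 32B parameter count is approximate and that this ignores the small per-tensor scale overhead dynamic quantization adds, as well as activation and KV-cache memory:

```python
# Back-of-the-envelope weight-memory math for FP8 vs BF16.
def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in decimal gigabytes."""
    return n_params * bytes_per_param / 1e9

n = 32e9                          # ~32B parameters
bf16 = weight_memory_gb(n, 2)     # 2 bytes/weight
fp8 = weight_memory_gb(n, 1)      # 1 byte/weight
print(f"BF16: {bf16:.0f} GB, FP8: {fp8:.0f} GB, saving {1 - fp8 / bf16:.0%}")
```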
## QwQ-32B-FP8-dynamic
**RedHatAI** · MIT · Large Language Model · Transformers · Downloads: 3,107 · Likes: 8

An FP8-quantized version of QwQ-32B. Dynamic quantization cuts storage and memory requirements by 50% while preserving 99.75% of the original model's accuracy.
## Lucie-7B-Instruct-v1.1
**OpenLLM-France** · Apache-2.0 · Large Language Model · Supports Multiple Languages · Downloads: 13.33k · Likes: 9

A multilingual causal language model fine-tuned from Lucie-7B, supporting French and English and focused on instruction following and text generation.
## DeepSeek-R1-bf16
**opensourcerelease** · MIT · Large Language Model · Transformers · Downloads: 1,486 · Likes: 16

DeepSeek-R1 is a first-generation reasoning model that performs strongly on mathematics, code, and reasoning tasks, with performance comparable to OpenAI o1.
## AceMath-72B-Instruct
**nvidia** · Large Language Model · Safetensors · English · Downloads: 3,141 · Likes: 18

AceMath is a family of frontier models designed for mathematical reasoning. Built on Qwen, it excels at solving English math problems using chain-of-thought (CoT) reasoning.
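Models in this family are typically prompted to reason step by step and box the final answer. Here is a sketch of such a prompt in the common chat-message format; the exact system prompt AceMath expects is an assumption borrowed from the Qwen-math convention, not a documented default.

```python
# Chain-of-thought prompt sketch for a math instruct model.
# System prompt is the step-by-step / \boxed{} convention (an assumption).
question = "What is the sum of the first 10 positive odd integers?"
messages = [
    {
        "role": "system",
        "content": "Please reason step by step, and put your final answer "
                   "within \\boxed{}.",
    },
    {"role": "user", "content": question},
]
# A correct CoT reply would note that the first n odd integers sum to n^2,
# so it should end with \boxed{100}.
print(len(messages), messages[1]["content"])
```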
## gemma-2-9b-neogenesis-ita
**anakin87** · Large Language Model · Transformers · Supports Multiple Languages · Downloads: 3,029 · Likes: 10

A fine-tuned version of VAGOsolutions/SauerkrautLM-gemma-2-9b-it, optimized for Italian-language performance and supporting an 8k context length.
## Qwen2-1.5B-Ita
**DeepMount00** · Apache-2.0 · Large Language Model · Transformers · Supports Multiple Languages · Downloads: 6,220 · Likes: 21

Qwen2 1.5B is a compact language model optimized for Italian, approaching the performance of ITALIA (iGenius) while being six times smaller.
## InternLM2-Math-Plus-1.8B
**internlm** · Other · Large Language Model · Transformers · Supports Multiple Languages · Downloads: 437 · Likes: 11

InternLM-Math-Plus is a state-of-the-art bilingual open-source large language model for mathematical reasoning, with capabilities spanning solving, proving, verifying, and augmenting problems.
## Yi-1.5-6B-Chat
**01-ai** · Apache-2.0 · Large Language Model · Transformers · Downloads: 13.32k · Likes: 42

Yi-1.5 is an upgraded version of Yi, excelling at programming, mathematics, reasoning, and instruction following while retaining strong language understanding, commonsense reasoning, and reading comprehension.
## Yi-1.5-9B
**01-ai** · Apache-2.0 · Large Language Model · Transformers · Downloads: 6,140 · Likes: 48

Yi-1.5 is an upgraded version of Yi, excelling at programming, mathematics, reasoning, and instruction following while retaining strong language understanding, commonsense reasoning, and reading comprehension.
## Yi-1.5-9B-Chat
**01-ai** · Apache-2.0 · Large Language Model · Transformers · Downloads: 17.16k · Likes: 143

Yi-1.5 is an upgraded version of Yi, excelling at programming, mathematics, reasoning, and instruction following while retaining strong language understanding, commonsense reasoning, and reading comprehension.
## Llama-3-bophades-v3-8B
**nbeerbower** · Other · Large Language Model · Transformers · Downloads: 44 · Likes: 3

A DPO-fine-tuned model based on Llama-3-8b, focused on improving truthfulness and mathematical reasoning.
## OpenMath-CodeLlama-7b-Python-hf
**nvidia** · Large Language Model · Transformers · Supports Multiple Languages · Downloads: 83 · Likes: 7

OpenMath models solve mathematical problems by interleaving textual reasoning with code blocks executed by a Python interpreter. Trained on the OpenMathInstruct-1 dataset of 1.8 million problem-solution pairs.
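The interleaving described above can be pictured as a small loop: the model emits a Python block, an executor runs it, and the captured output is appended to the reasoning trace. Below is a minimal sketch of the executor step; `run_generated_code` is a hypothetical helper, and a production system would sandbox execution rather than call `exec` directly.

```python
# Toy executor for model-emitted Python blocks (NOT sandboxed).
import contextlib
import io

def run_generated_code(code: str) -> str:
    """Execute a code block and capture whatever it prints to stdout."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})  # empty globals: no access to caller's namespace
    return buf.getvalue().strip()

# Pretend the model produced this block while solving "sum of squares 1..10":
model_block = "print(sum(i * i for i in range(1, 11)))"
result = run_generated_code(model_block)
assert result == "385"  # 10*11*21/6 = 385; result is fed back into the trace
```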
## piccolo-math-2x7b
**macadeliccc** · MIT · Large Language Model · Transformers · Downloads: 87 · Likes: 2

Piccolo-math-2x7b is a large language model specializing in mathematical and logical reasoning, named in honor of the author's dog Klaus. It performs well across multiple benchmarks, particularly on math and code-generation tasks.